- Abstract: A neural network (NN) surrogate of the NASA GISS ModelE atmosphere (version E3) is trained on a perturbed parameter ensemble (PPE) spanning 45 physics parameters and 36 outputs. The NN is leveraged in a Markov Chain Monte Carlo (MCMC) Bayesian parameter inference framework to generate a second, posterior-constrained ensemble coined a "calibrated physics ensemble," or CPE. The CPE members are characterized by diverse parameter combinations, are by definition close to top-of-atmosphere radiative balance, and must broadly agree with numerous hydrologic, energy-cycle, and radiative-forcing metrics simultaneously. Global observations of numerous cloud, environment, and radiation properties (provided by global satellite products) are crucial for CPE generation. The inference framework explicitly accounts for discrepancies (or biases) in satellite products during CPE generation. We demonstrate that product discrepancies strongly impact calibration of important model parameter settings (e.g., convective plume entrainment rates; fall speed for cloud ice). Structural improvements new to E3 are retained across CPE members (e.g., stratocumulus simulation). Notably, the framework improved the simulation of shallow cumulus and Amazon rainfall without degrading radiation fields, an upgrade that neither default parameters nor Latin Hypercube parameter searching achieved. Analyses of the initial PPE suggested several parameters were unimportant for output variation. However, many "unimportant" parameters were needed for CPE generation, a result that brings to the forefront how parameter importance should be determined in PPEs. From the CPE, two diverse 45-dimensional parameter configurations are retained to generate radiatively balanced, auto-tuned atmospheres that were used in two E3 submissions to CMIP6.
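  To make the calibration loop concrete, below is a minimal sketch of surrogate-based MCMC parameter inference. It assumes a toy linear surrogate standing in for the trained NN, a uniform prior, and a Gaussian error model whose variance bundles observational and satellite-product discrepancy; apart from the 45-parameter and 36-output dimensions quoted in the abstract, every name and value is illustrative, and this is not the authors' code.

  ```python
  # Minimal sketch of surrogate-based MCMC calibration. NOT the authors'
  # code; every name and value below is an assumption for illustration.
  import numpy as np

  rng = np.random.default_rng(0)

  N_PARAMS, N_OUTPUTS = 45, 36  # dimensions quoted in the abstract

  # Stand-in for the trained NN surrogate: any callable mapping a
  # parameter vector to the 36 predicted outputs would slot in here.
  W = rng.standard_normal((N_OUTPUTS, N_PARAMS)) / np.sqrt(N_PARAMS)
  def surrogate(theta):
      return W @ theta

  # Hypothetical observational targets; sigma_obs bundles instrument error
  # plus the satellite-product discrepancy the framework accounts for.
  theta_true = rng.uniform(-1.0, 1.0, N_PARAMS)
  y_obs = surrogate(theta_true) + 0.1 * rng.standard_normal(N_OUTPUTS)
  sigma_obs = np.full(N_OUTPUTS, 0.3)

  def log_posterior(theta):
      # Uniform prior on [-1, 1]^45, Gaussian likelihood on the outputs.
      if np.any(np.abs(theta) > 1.0):
          return -np.inf
      resid = (surrogate(theta) - y_obs) / sigma_obs
      return -0.5 * np.sum(resid ** 2)

  def metropolis(n_steps=20_000, step=0.05):
      theta = np.zeros(N_PARAMS)
      lp = log_posterior(theta)
      chain = []
      for _ in range(n_steps):
          prop = theta + step * rng.standard_normal(N_PARAMS)
          lp_prop = log_posterior(prop)
          if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
              theta, lp = prop, lp_prop
          chain.append(theta.copy())
      return np.asarray(chain)

  # Thinned posterior draws play the role of CPE members: distinct
  # parameter settings that all stay consistent with the targets.
  cpe = metropolis()[::200]
  print(cpe.shape)  # (100, 45)
  ```

  Each thinned draw is one CPE-member analogue: a distinct 45-dimensional parameter setting that nonetheless fits the observational targets within their stated uncertainty.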
- We develop a simple Quantile Spacing (QS) method for accurate probabilistic estimation of one-dimensional entropy from equiprobable random samples, and compare it with the popular Bin-Counting (BC) and Kernel Density (KD) methods. In contrast to BC, which uses equal-width bins with varying probability mass, the QS method uses estimates of the quantiles that divide the support of the data-generating probability density function (pdf) into equal-probability-mass intervals. And, whereas BC and KD each require optimal tuning of a hyper-parameter whose value varies with sample size and shape of the pdf, QS only requires specification of the number of quantiles to be used. Results indicate, for the class of distributions tested, that the optimal number of quantiles is a fixed fraction of the sample size (empirically determined to be ~0.25–0.35), and that this value is relatively insensitive to distributional form or sample size. This provides a clear advantage over BC and KD, since hyper-parameter tuning is not required. Further, unlike KD, there is no need to select an appropriate kernel type, so QS is applicable to pdfs of arbitrary shape, including those with discontinuous slope and/or magnitude. Bootstrapping is used to approximate the sampling variability distribution of the resulting entropy estimate, and is shown to accurately reflect the true uncertainty. For the four distributional forms studied (Gaussian, Log-Normal, Exponential, and Bimodal Gaussian Mixture), expected estimation bias is less than 1% and uncertainty is low even for samples of as few as 100 data points; in contrast, for KD the small-sample bias can be as large as −10% and for BC as large as −50%. We speculate that estimating quantile locations, rather than bin probabilities, results in more efficient use of the information in the data to approximate the underlying shape of an unknown data-generating pdf.
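  Below is a minimal sketch of the QS estimator as described in the abstract, with a simple nonparametric bootstrap for the sampling uncertainty. It assumes direct empirical quantiles via np.quantile (the paper's exact quantile-estimation scheme may differ), and the function names and the 0.3 default quantile fraction are illustrative, not the authors' released code.

  ```python
  # Sketch of Quantile Spacing (QS) entropy estimation with bootstrap
  # uncertainty. An assumption-laden reimplementation, not the paper's code.
  import numpy as np

  rng = np.random.default_rng(1)

  def qs_entropy(samples, frac=0.3):
      """Entropy (nats) from equiprobable samples via quantile spacings.

      frac: number of equal-mass intervals as a fraction of sample size
      (the abstract reports ~0.25-0.35 as near-optimal).
      """
      x = np.sort(np.asarray(samples, dtype=float))
      m = max(2, int(frac * x.size))  # number of equal-mass intervals
      q = np.quantile(x, np.linspace(0.0, 1.0, m + 1))
      dq = np.diff(q)
      dq = dq[dq > 0]  # guard against ties in resampled data
      # Piecewise-constant pdf with mass 1/m over each spacing dq_i gives
      # H = -sum_i (1/m) * log((1/m)/dq_i) = mean(log(m * dq_i)).
      return np.mean(np.log(m * dq))

  def qs_entropy_bootstrap(samples, n_boot=500, frac=0.3):
      """Bootstrap distribution of the QS entropy estimate."""
      x = np.asarray(samples)
      return np.array([
          qs_entropy(rng.choice(x, size=x.size, replace=True), frac)
          for _ in range(n_boot)
      ])

  # Small-sample check on a standard Gaussian (n = 100).
  x = rng.standard_normal(100)
  boot = qs_entropy_bootstrap(x)
  print(qs_entropy(x), boot.mean(), boot.std())
  ```

  For the standard Gaussian used in the check, the estimate should land near the analytic entropy 0.5·ln(2πe) ≈ 1.419 nats even at n = 100, with the bootstrap spread approximating the small-sample uncertainty, consistent with the behavior the abstract reports.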